A hypothesis about the rate of global convergence for optimal methods (Newton-type) in smooth convex optimization
Authors
Abstract
Similar works
Bundle-level type methods uniformly optimal for smooth and nonsmooth convex optimization
The main goal of this paper is to develop uniformly optimal first-order methods for convex programming (CP). By uniform optimality we mean that the first-order methods themselves do not require the input of any problem parameters, but can still achieve the best possible iteration complexity bounds. By incorporating a multi-step acceleration scheme into the well-known bundle-level method, we dev...
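As a rough illustration of the optimal smooth rate such accelerated schemes target, here is a minimal sketch of Nesterov's accelerated gradient method (not the paper's bundle-level scheme; the quadratic test problem, step size, and iteration count are illustrative assumptions):

```python
import numpy as np

def accelerated_gradient(grad, x0, L, iters):
    """Nesterov's accelerated gradient with known Lipschitz constant L."""
    x = y = np.asarray(x0, dtype=float)
    t = 1.0
    for _ in range(iters):
        x_new = y - grad(y) / L                       # gradient step from the extrapolated point
        t_new = (1 + np.sqrt(1 + 4 * t * t)) / 2      # momentum parameter update
        y = x_new + ((t - 1) / t_new) * (x_new - x)   # extrapolation (momentum) step
        x, t = x_new, t_new
    return x

# Example: minimize f(x) = 0.5 * x^T A x, whose unique minimizer is the origin.
A = np.diag([1.0, 10.0, 100.0])
grad = lambda x: A @ x
x_star = accelerated_gradient(grad, x0=np.ones(3), L=100.0, iters=500)
# x_star is close to 0 by the standard O(1/t^2) guarantee
```

Note that this sketch requires the Lipschitz constant `L` as input; the point of the paper above is precisely to remove that requirement while keeping the optimal rate.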
Bundle-type methods uniformly optimal for smooth and nonsmooth convex optimization
The bundle-level method and certain of its variants are known to exhibit an optimal rate of convergence, i.e., O(1/√t), as well as excellent practical performance for solving general nonsmooth convex programming (CP) problems. However, this rate of convergence is significantly worse than the optimal one for solving smooth CP problems, i.e., O(1/t). In this paper, we present new bundle-type method...
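The O(1/√t) nonsmooth rate mentioned above can be seen in the classical subgradient method (a minimal sketch, not the bundle-level method itself; the objective f(x) = |x|, starting point, and step constant are illustrative assumptions):

```python
import numpy as np

def subgradient_method(x0, iters, c=1.0):
    """Subgradient method on f(x) = |x| with diminishing steps c/sqrt(k+1)."""
    x = float(x0)
    best = abs(x)
    for k in range(iters):
        g = np.sign(x)                  # a subgradient of |x| (0 at x = 0)
        x -= c / np.sqrt(k + 1) * g     # diminishing step size
        best = min(best, abs(x))        # track the best objective value so far
    return best

best_gap = subgradient_method(x0=1.3, iters=2000)
# best_gap decays roughly like O(1/sqrt(t)), per the standard analysis
```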
Optimal Newton-type methods for nonconvex smooth optimization problems
We consider a general class of second-order iterations for unconstrained optimization that includes regularization and trust-region variants of Newton's method. For each method in this class, we exhibit a smooth, bounded-below objective function, whose gradient is globally Lipschitz continuous within an open convex set containing any iterates encountered and whose Hessian is α-Hölder continuous...
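A hedged sketch of the kind of iteration in this class (a Levenberg-Marquardt-style regularized Newton step; the convex quadratic test problem and regularization weight sigma are illustrative assumptions, chosen for simplicity rather than to match the paper's nonconvex setting):

```python
import numpy as np

def regularized_newton(grad, hess, x0, sigma, iters):
    """Newton's method with a fixed ridge regularization of the Hessian."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        H = hess(x) + sigma * np.eye(x.size)   # shifted Hessian stays invertible
        x = x - np.linalg.solve(H, grad(x))    # regularized Newton step
    return x

# Example: f(x) = 0.5 * x^T A x, minimizer at the origin.
A = np.diag([1.0, 10.0])
x_min = regularized_newton(lambda x: A @ x, lambda x: A,
                           x0=[3.0, -2.0], sigma=0.1, iters=60)
# x_min is near the unique minimizer at the origin
```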
On the iterate convergence of descent methods for convex optimization
We study the iterate convergence of strong descent algorithms applied to convex functions. We assume that the function satisfies a very simple growth condition around its minimizers, and then show that the trajectory described by the iterates generated by any such method has finite length, which proves that the sequence of iterates converges.
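The finite-trajectory-length claim can be illustrated on a strongly convex quadratic, a special case of such a growth condition (a minimal sketch, not the paper's general argument; the problem, starting point, and step size are assumptions):

```python
import numpy as np

def descent_trajectory(grad, x0, step, iters):
    """Fixed-step steepest descent, accumulating the trajectory length."""
    x = np.asarray(x0, dtype=float)
    length = 0.0
    for _ in range(iters):
        x_new = x - step * grad(x)            # descent step
        length += np.linalg.norm(x_new - x)   # accumulate path length
        x = x_new
    return x, length

# f(x) = 0.5 * x^T A x, minimizer at the origin.
A = np.diag([1.0, 4.0])
x_fin, path_len = descent_trajectory(lambda x: A @ x, [2.0, 2.0],
                                     step=0.2, iters=200)
# x_fin is near 0 and path_len stays finite (bounded by a geometric series)
```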
Convex Programming Methods for Global Optimization
We investigate some approaches to solving nonconvex global optimization problems by convex nonlinear programming methods. We assume that the problem becomes convex when selected variables are fixed. The selected variables must be discrete, or else discretized if they are continuous. We provide a survey of disjunctive programming with convex relaxations, logic-based outer approximation, and logi...
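The fix-the-selected-variables idea can be sketched on a toy problem (an illustrative example, not from the survey: the objective f(x, z) = (x - c[z])² + d[z] and its data are assumptions; for each fixed discrete z the subproblem in x is convex with a closed-form minimizer):

```python
# Toy data for the illustrative problem (assumed, not from the survey).
c = {0: -1.0, 1: 0.5, 2: 3.0}   # subproblem centers per discrete choice z
d = {0: 2.0, 1: 0.3, 2: 1.5}    # fixed cost per discrete choice z

def solve_subproblem(z):
    """For fixed z the problem is convex in x; its minimizer is x = c[z]."""
    return c[z], d[z]            # (argmin over x, optimal value)

# Global solve: enumerate the discrete variable, solve each convex
# subproblem, and keep the best.
best_z, (best_x, best_val) = min(
    ((z, solve_subproblem(z)) for z in c), key=lambda t: t[1][1]
)
# best_z == 1, best_x == 0.5, best_val == 0.3
```

Disjunctive programming and outer approximation refine this brute-force enumeration with convex relaxations that prune discrete choices.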
Journal
Journal title: Computer Research and Modeling
Year: 2018
ISSN: 2076-7633, 2077-6853
DOI: 10.20537/2076-7633-2018-10-3-305-314